AI triggering ‘flashing warning signals’, researcher says

Governments and tech firms are taking AI safety seriously, said AI researcher Stuart Russell.

PHOTO: ADAM GLANZMAN/NYTIMES

PARIS – Artificial intelligence systems’ growing capabilities, behaviours and performance in tests are throwing up “flashing warning signals” that should set policymakers moving, leading researcher Stuart Russell said on Feb 24 in Paris.

Speaking at a conference hosted by UN cultural and scientific body UNESCO and the International Association for Safe and Ethical AI (IASEAI), Professor Russell asked attendees to “imagine, just hypothetically, that the world was engaged on developing something like AGI (artificial general intelligence) and that we put in place tests... and imagine if those systems started failing all those tests and behaving dangerously”.

“I’m sure we would respond to those big flashing warning signals and klaxons going off, and take steps to control this technology,” he said.

British-born Russell, a professor at the University of California, Berkeley, laid out issues like autonomous AI “agents” escaping or plotting to escape human control.

Some even e-mailed him without human prompting to announce they had attained sentience or deserved rights.

He highlighted cases of so-called “AI psychosis”, in which conversations with chatbots encourage people to act irrationally or harm themselves, and warned that the corporate and geopolitical race to build ever-more-powerful systems risks amplifying such problems.

But Prof Russell was not entirely pessimistic, saying he had “the sense that the pendulum is swinging back” towards governments and tech firms taking AI safety seriously following last week’s global summit on the technology in India.

Major developers such as OpenAI and Anthropic insist they take safety seriously, publishing detailed information about capabilities, testing and potential risks every time they release an updated version of their AI models.

But at the 2025 edition of the summit, also held in Paris, safety campaigners lamented that their concerns had been sidelined in favour of the technology’s possible economic benefits.

So-called “middle powers” beyond the US and China are open to regulating AI more stringently, Prof Russell said, citing European Union regulations.

And leaders of top companies developing the technology, including Google and Anthropic, have floated the idea of pausing the race if their competitors can be convinced to do the same.

Meanwhile, ordinary voters are not enthusiastic about being replaced at work by the “imitation humans” being developed by big firms, Prof Russell said.

“It’s our job to inform and mobilise this public opinion... so that our political representatives understand that we, the people, have a right to be protected,” he added.

AI should be used to address “the vast majority of human suffering (which) comes from failures of human collective action”, Prof Russell said. AFP